Evaluation of state-of-the-art segmentation algorithms for left ventricle infarct from late Gadolinium enhancement MR images
Studies have demonstrated the feasibility of late Gadolinium enhancement (LGE) cardiovascular magnetic
resonance (CMR) imaging for guiding the management of patients with sequelae to myocardial infarction,
such as ventricular tachycardia and heart failure. Clinical implementation of these developments necessitates
a reproducible and reliable segmentation of the infarcted regions. Comparing new algorithms for infarct
segmentation in the left ventricle (LV) against existing ones is challenging, and benchmarking
datasets with standardized evaluation strategies are much needed to facilitate such comparisons. This manuscript presents
a benchmarking evaluation framework for future algorithms that segment infarct from LGE CMR of the
LV. The image database consists of 30 LGE CMR images of both humans and pigs that were acquired
from two separate imaging centres. A consensus ground truth was obtained for all data using maximum
likelihood estimation.
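The abstract does not give implementation details of the maximum-likelihood consensus, so the following is only a minimal sketch: under the simplifying assumption of equal, known rater reliabilities, the per-voxel maximum-likelihood label reduces to a majority vote over the raters' binary masks (a full ML approach such as STAPLE would additionally estimate each rater's sensitivity and specificity via EM). All names and data here are hypothetical.

```python
import numpy as np

def consensus_ground_truth(ratings):
    """Fuse binary rater segmentations into a consensus mask.

    With equal rater reliabilities, the per-voxel maximum-likelihood
    label is simply the majority vote across raters.
    """
    stack = np.stack(ratings)            # shape: (n_raters, *image_shape)
    votes = stack.sum(axis=0)            # per-voxel count of positive votes
    return (votes * 2 > stack.shape[0]).astype(np.uint8)

# three hypothetical rater masks over a 1-D "image" of 5 voxels
r1 = np.array([1, 1, 0, 0, 1])
r2 = np.array([1, 0, 0, 1, 1])
r3 = np.array([1, 1, 0, 0, 0])
print(consensus_ground_truth([r1, r2, r3]))  # [1 1 0 0 1]
```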
Six widely-used fixed-thresholding methods and five recently developed algorithms are tested on the
benchmarking framework. Results demonstrate that the algorithms have better overlap with the consensus
ground truth than most of the n-SD fixed-thresholding methods, with the exception of the Full-Width-at-Half-Maximum
(FWHM) fixed-thresholding method. Some of the pitfalls of fixed-thresholding
methods are demonstrated in this work. The benchmarking evaluation framework, which is a contribution
of this work, can be used to test and benchmark future algorithms that detect and quantify infarct
in LGE CMR images of the LV. The datasets, ground truth and evaluation code have been made publicly
available through the website: https://www.cardiacatlas.org/web/guest/challenges
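For readers unfamiliar with the two families of fixed-thresholding rules compared above, the sketch below shows the forms commonly used in the LGE literature: the n-SD rule labels as infarct any myocardial voxel brighter than the mean of a remote (healthy) region plus n standard deviations, while the FWHM rule labels voxels above 50% of the maximum myocardial signal intensity. Function and variable names are illustrative, not taken from the benchmarked code.

```python
import numpy as np

def n_sd_threshold(myocardium, remote_mask, n=5):
    """n-SD rule: voxels brighter than mean + n*SD of remote myocardium."""
    remote = myocardium[remote_mask]
    return myocardium > (remote.mean() + n * remote.std())

def fwhm_threshold(myocardium):
    """FWHM rule: voxels above half the maximum myocardial intensity."""
    return myocardium > 0.5 * myocardium.max()

# hypothetical 1-D myocardial intensity profile; first four voxels are remote
myo = np.array([10.0, 11.0, 9.0, 10.0, 50.0, 52.0])
remote = np.array([True, True, True, True, False, False])
print(n_sd_threshold(myo, remote, n=5))  # [False False False False  True  True]
print(fwhm_threshold(myo))               # [False False False False  True  True]
```

Both rules agree on this toy profile, but with a noisier remote region the n-SD threshold shifts with the estimated SD while the FWHM threshold depends only on the peak intensity, which is one reason their behaviour diverges in practice.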
Predicting Response to Neoadjuvant Chemotherapy with PET Imaging Using Convolutional Neural Networks.
Imaging of cancer with 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) has become a standard component of diagnosis and staging in oncology, and is becoming more important as a quantitative monitor of individual response to therapy. In this article we investigate the challenging problem of predicting a patient's response to neoadjuvant chemotherapy from a single 18F-FDG PET scan taken prior to treatment. We take a "radiomics" approach whereby a large number of quantitative features are automatically extracted from pretherapy PET images in order to build a comprehensive quantification of the tumor phenotype. While the dominant methodology relies on hand-crafted texture features, we explore the potential of automatically learning low- to high-level features directly from PET scans. We report on a study that compares the performance of two competing radiomics strategies: an approach based on state-of-the-art statistical classifiers using over 100 quantitative imaging descriptors, including texture features as well as standardized uptake values, and a convolutional neural network, 3S-CNN, trained directly from PET scans by taking sets of adjacent intra-tumor slices. Our experimental results, based on a sample of 107 patients with esophageal cancer, provide initial evidence that convolutional neural networks have the potential to extract PET imaging representations that are highly predictive of response to therapy. On this dataset, 3S-CNN achieves an average 80.7% sensitivity and 81.6% specificity in predicting non-responders, and outperforms other competing predictive models.
Classification results: each figure is the average of three independent experiments using different training and test datasets.
Kaplan-Meier plot showing the survival rates of responders and non-responders.
Examples of feature maps in the first and last max-pooling layers <b>V</b><sup>(1)</sup><b>,</b><b>V</b><sup>(4)</sup> of the CNN architecture.
The feature maps illustrate how a specific triplet is represented in the first and last max-pooling layers.
Distributions of axial <sup>18</sup>F-FDG PET intra-tumor slices extracted from the 3D tumor volume of non-responders and responders.
<sup>18</sup>F-FDG PET ROIs of a specific tumor <i>i</i> after segmentation, embedded into a larger square background of standard size of 100 × 100 pixels.
Each enlarged slice is denoted by <b>x</b><sub><i>i,j</i></sub> and each set of three spatially adjacent enlarged slices is denoted by <b>z</b><sub><i>i,k</i></sub>, where <i>j</i> and <i>k</i> index the slices and triplets of the specific tumor <i>i</i>. In this example only 3 triplets can be formed from the 5 available slices, so <i>k</i> = 1, 2, 3.
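The caption's counting (3 triplets from 5 slices) implies overlapping windows of three adjacent slices, i.e. J slices yield J − 2 triplets. A minimal sketch of that grouping, with placeholder strings standing in for the 2-D slice arrays, might look like this (names are hypothetical, not taken from the paper's code):

```python
def make_triplets(slices):
    """Group spatially adjacent slices into overlapping triplets z_{i,k}.

    From J slices, J - 2 triplets can be formed (k = 1..J-2); e.g. 5
    slices yield 3 triplets, matching the example in the caption.
    """
    return [slices[k:k + 3] for k in range(len(slices) - 2)]

slices = ["x1", "x2", "x3", "x4", "x5"]  # placeholders for 2-D PET slices
print(make_triplets(slices))             # 3 overlapping triplets
```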